7 research outputs found
Predicting Player Engagement in Tom Clancy's The Division 2: A Multimodal Approach via Pixels and Gamepad Actions
This paper introduces a large-scale multimodal corpus collected for the
purpose of analysing and predicting player engagement in commercial-standard
games. The corpus is solicited from 25 players of the action role-playing game
Tom Clancy's The Division 2, who annotated their level of engagement using a
time-continuous annotation tool. The cleaned and processed corpus presented in
this paper consists of nearly 20 hours of annotated gameplay videos accompanied
by logged gamepad actions. We report preliminary results on predicting
long-term player engagement based on in-game footage and game controller
actions using Convolutional Neural Network architectures. Results obtained
suggest that player engagement can be predicted with up to 72% accuracy on
average (88% at best) when we fuse information from the game footage and the player's
controller input. Our findings validate the hypothesis that long-term (i.e. 1
hour of play) engagement can be predicted efficiently solely from pixels and
gamepad actions.
Comment: 8 pages, accepted for publication and presentation at the 2023 25th ACM International Conference on Multimodal Interaction (ICMI).
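The fusion step the abstract describes can be sketched as follows. This is a minimal, hypothetical illustration: in the paper, frame features would come from a CNN over gameplay footage and action features from logged gamepad input, whereas here both are stand-in vectors and the classifier is a toy linear layer (all names and weights are assumptions, not the paper's model).

```python
# Toy late-fusion sketch for engagement prediction (illustrative only).
# frame_feats: stand-in for pooled CNN activations over game footage.
# action_feats: stand-in for gamepad statistics (e.g. button-press rate).

def fuse_and_classify(frame_feats, action_feats, weights, bias=0.0):
    """Concatenate modality features and apply a linear classifier."""
    fused = frame_feats + action_feats          # late fusion by concatenation
    score = sum(w * x for w, x in zip(weights, fused)) + bias
    return 1 if score > 0 else 0                # 1 = "high engagement"

frame_feats = [0.8, 0.1]     # hypothetical footage features
action_feats = [0.5]         # hypothetical controller feature
weights = [1.0, -0.5, 0.6]   # hypothetical learned weights
label = fuse_and_classify(frame_feats, action_feats, weights)
```

The point of the sketch is only the structure: each modality contributes its own feature vector, and the fused vector feeds a single classifier, mirroring the pixels-plus-gamepad fusion reported above.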
Knowing Your Annotator: Rapidly Testing the Reliability of Affect Annotation
The laborious and costly nature of affect annotation is a key detrimental factor for obtaining large-scale corpora with valid and reliable affect labels. Motivated by the lack of tools that can effectively determine an annotator's reliability, this paper proposes general quality assurance (QA) tests for real-time continuous annotation tasks. Assuming that the annotation tasks rely on stimuli with audiovisual components, such as videos, we propose and evaluate two QA tests: a visual and an auditory QA test. We validate the QA tool with 20 annotators who are asked to go through the test followed by a lengthy task of annotating the engagement of gameplay videos. Our findings suggest that the proposed QA tool reveals, unsurprisingly, that trained annotators are more reliable than the best of the untrained crowdworkers we could employ. Importantly, the QA tool introduced can effectively predict the reliability of an affect annotator with 80% accuracy, thereby saving resources, effort and cost, and maximizing the reliability of labels solicited in affective corpora. The introduced QA tool is available and accessible through the PAGAN annotation platform.
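The core idea of such a QA test can be sketched as follows. This is an illustrative assumption about how a reliability check might work, not the paper's actual procedure or thresholds: the annotator traces a signal while watching a stimulus whose ground-truth intensity curve is known, and the trace-to-truth correlation decides reliability.

```python
# Hypothetical QA check for a continuous annotation task:
# compare an annotator's trace against the stimulus ground truth.

def pearson(xs, ys):
    """Pearson correlation between two equal-length traces."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def is_reliable(annotation, ground_truth, threshold=0.6):
    """Flag an annotator as reliable if their trace tracks the stimulus."""
    return pearson(annotation, ground_truth) >= threshold

truth = [0.0, 0.2, 0.5, 0.9, 0.6]   # known intensity of the QA stimulus
good = [0.1, 0.3, 0.5, 0.8, 0.5]    # a trace that follows the stimulus
noisy = [0.9, 0.1, 0.8, 0.0, 0.7]   # a trace that ignores the stimulus
```

A screening of this shape is what lets unreliable annotators be filtered out before they are given the lengthy, costly annotation task proper.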
Play with Emotion: Affect-Driven Reinforcement Learning
This paper introduces a paradigm shift by viewing
the task of affect modeling as a reinforcement learning (RL)
process. According to the proposed paradigm, RL agents learn
a policy (i.e. affective interaction) by attempting to maximize a
set of rewards (i.e. behavioral and affective patterns) via their
experience with their environment (i.e. context). Our hypothesis is
that RL is an effective paradigm for interweaving affect elicitation
and manifestation with behavioral and affective demonstrations.
Importantly, our second hypothesis—building on Damasio’s somatic
marker hypothesis—is that emotion can be the facilitator
of decision-making. We test our hypotheses in a racing game
by training Go-Blend agents to model human demonstrations
of arousal and behavior; Go-Blend is a modified version of the
Go-Explore algorithm which has recently showcased supreme
performance in hard exploration tasks. We first vary the arousal-based
reward function and observe agents that can effectively
display a palette of affect and behavioral patterns according to
the specified reward. Then we use arousal-based state selection
mechanisms in order to bias the strategies that Go-Blend explores.
Our findings suggest that not only is Go-Blend an efficient
affect modeling paradigm but, more importantly, that affect-driven
RL improves exploration and yields higher-performing agents,
validating Damasio's hypothesis in the domain of games.
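The arousal-based reward the abstract varies can be sketched in a few lines. This is a minimal sketch under stated assumptions, not Go-Blend's actual reward: the blend weight and traces below are hypothetical, and the shape is simply a mix of a behavioral score with how closely the agent's arousal matches a human arousal demonstration.

```python
# Hypothetical arousal-blended reward in the spirit of Go-Blend.
# behavior_score: task performance (e.g. racing progress), in [0, 1].
# agent_arousal / human_arousal: arousal values at the same point of play.

def blended_reward(behavior_score, agent_arousal, human_arousal, blend=0.5):
    """Higher when behavior is good AND arousal matches the human trace."""
    affect_match = 1.0 - abs(agent_arousal - human_arousal)  # in [0, 1]
    return blend * behavior_score + (1.0 - blend) * affect_match

r_match = blended_reward(1.0, 0.7, 0.7)  # good behavior, matching arousal
r_miss = blended_reward(1.0, 0.0, 1.0)   # good behavior, mismatched arousal
```

Varying `blend` between 0 and 1 is one way to obtain the palette of affect and behavioral patterns described above: at one extreme the agent only optimizes behavior, at the other it only imitates the human arousal trace.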
Generative personas that behave and experience like humans
Using artificial intelligence (AI) to automatically test a game remains
a critical challenge for the development of richer and more
complex game worlds and for the advancement of AI at large. One
of the most promising methods for achieving that long-standing
goal is the use of generative AI agents, namely procedural personas,
that attempt to imitate particular playing behaviors which are represented
as rules, rewards, or human demonstrations. All research
efforts for building those generative agents, however, have focused
solely on playing behavior, which is arguably a narrow perspective
of what a player actually does in a game. Motivated by this gap in
the existing state of the art, in this paper we extend the notion of
behavioral procedural personas to cater for player experience, thus
examining generative agents that can both behave and experience
their game as humans would. For that purpose, we employ the Go-Explore
reinforcement learning paradigm for training human-like
procedural personas, and we test our method on behavior and experience
demonstrations of more than 100 players of a racing game.
Our findings suggest that the generated agents exhibit distinctive
play styles and experience responses of the human personas they
were designed to imitate. Importantly, it also appears that experience,
which is tied to playing behavior, can be a highly informative
driver for better behavioral exploration.
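The way experience can drive exploration in a Go-Explore-style archive can be sketched as follows. All structures here are hypothetical assumptions for illustration: instead of selecting an archived state ("cell") to return to purely by visit count, selection is biased toward cells whose recorded arousal best matches the human demonstration at that point of play.

```python
# Hypothetical experience-biased cell selection for a Go-Explore archive.
# Each cell stores a game state plus the arousal recorded when reaching it.

def select_cell(archive, target_arousal):
    """Pick the archived cell whose arousal best matches the human trace."""
    return min(archive, key=lambda c: abs(c["arousal"] - target_arousal))

archive = [
    {"state": "s0", "arousal": 0.2},
    {"state": "s1", "arousal": 0.8},
    {"state": "s2", "arousal": 0.5},
]
cell = select_cell(archive, target_arousal=0.75)
```

Biasing the return point this way is one concrete reading of experience acting as a driver for behavioral exploration: the agent keeps revisiting states that feel like the human demonstration, not just states that are rarely visited.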